There is no known useful formula that separates all of the prime numbers from the composites. However, the distribution of primes, that is to say, the statistical behavior of primes in the large, can be modeled. The first result in that direction is the prime number theorem, proven at the end of the 19th century, which says that the probability that a randomly chosen large number n is prime is inversely proportional to its number of digits, or to the logarithm of n.
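To make this estimate concrete, here is a small numerical illustration (not part of the original text): a Python sketch comparing the prime-counting function pi(N) with the estimate N/ln(N) suggested by the prime number theorem. The sieve helper and the sample limits are choices made for the example.

```python
# Illustrative sketch: compare the prime-counting function pi(N)
# with the prime number theorem's estimate N / ln(N).
import math

def prime_count(limit):
    """Count primes up to `limit` with a simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(limit) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return sum(sieve)

# The ratio pi(N) / (N / ln(N)) tends to 1, though slowly, as N grows.
for n in (10**3, 10**4, 10**5, 10**6):
    actual = prime_count(n)
    estimate = n / math.log(n)
    print(f"N = {n:>8}: pi(N) = {actual:>7}, N/ln(N) = {estimate:9.1f}, ratio = {actual / estimate:.3f}")
```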
While a simple method, trial division quickly becomes impractical for testing large integers, because the number of possible factors grows too rapidly as n increases. According to the prime number theorem discussed above, the number of primes less than $\sqrt{n}$ is approximately $\sqrt{n}/\ln\sqrt{n}$, so the algorithm may need up to that many trial divisions to check the primality of n. For $n = 10^{20}$, this number is about 450 million, too large for many practical applications.
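For concreteness, the following Python sketch (the function name is an illustrative choice, not taken from the text) shows trial division with candidate divisors limited to $\sqrt{n}$, the bound whose cost is estimated above.

```python
# A minimal sketch of trial division: n is prime iff it has no divisor d
# with 2 <= d <= sqrt(n). Stopping at sqrt(n) is enough because any
# composite n must have a factor no larger than sqrt(n).
import math

def is_prime_trial_division(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Only odd candidates up to sqrt(n) need to be checked.
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

print([p for p in range(2, 30) if is_prime_trial_division(p)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```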
Modern primality tests for general numbers n can be divided into two main classes: probabilistic (or "Monte Carlo") and deterministic algorithms. The former merely "test" whether n is prime in the sense that they declare n to be either (definitely) composite or "probably prime", where the latter verdict means that n may or may not actually be prime. Composite numbers that do pass a given primality test are referred to as pseudoprimes. For example, Fermat's primality test relies on Fermat's little theorem, which says that for any prime number p and any integer a not divisible by p, $a^{p-1} - 1$ is divisible by p. Thus, if $a^{n-1} - 1$ is not divisible by n for some a not divisible by n, n cannot be prime. However, n may be composite even if this divisibility holds. In fact, there are infinitely many composite numbers n that pass the Fermat primality test for every choice of a that is coprime with n (Carmichael numbers), for example n = 561.
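As an illustration (the function name and the number of rounds are assumptions made for this sketch, not from the article), the following Python code implements the Fermat test just described and then checks directly that the Carmichael number 561 passes for every base coprime to it.

```python
# A minimal sketch of the Fermat primality test: n is declared "probably
# prime" if a^(n-1) is congruent to 1 (mod n) for several random bases a.
import math
import random

def fermat_test(n: int, rounds: int = 10) -> bool:
    """Return False if n is definitely composite, True if n is probably prime."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if math.gcd(a, n) != 1:
            return False          # a shares a nontrivial factor with n: composite
        if pow(a, n - 1, n) != 1:
            return False          # Fermat witness found: n is composite
    return True                   # no witness found: n is probably prime

print(fermat_test(97))   # True: 97 is prime
print(fermat_test(91))   # almost always False: 91 = 7 * 13
# The Carmichael number 561 = 3 * 11 * 17 satisfies a^560 = 1 (mod 561)
# for every base a coprime to 561, so those bases never expose it:
print(all(pow(a, 560, 561) == 1 for a in range(2, 561) if math.gcd(a, 561) == 1))  # True
```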
Further reading: [ref: wiki/Prime_number]